Last Update: 2025/3/26

LLMVision Speech API

The LLMVision Speech API converts text to speech and is compatible with OpenAI's SDK. This document covers the API endpoint, request parameters, and response format.

Endpoint

POST https://platform.llmprovider.ai/v1/audio/speech

Request Headers

| Header | Value |
| --- | --- |
| Authorization | Bearer YOUR_API_KEY |
| Content-Type | application/json |

Request Body

The request body should be a JSON object with the following parameters:

| Parameter | Type | Description | Note |
| --- | --- | --- | --- |
| model | string | The model to use (e.g., SenseTTS-Fusion-20250324). | |
| input | string | The text to generate audio for. The maximum length is 4096 characters. | |
| voice | string | The voice to use (girl_naisheng, girl_pingjing, girl_yingqi, or guy_qingshuang). | View the full voice list |
| response_format | string | (Optional) The format of the audio response (mp3, wav, or wav_stream). | wav_stream returns the audio as a stream. |
| speed | number | (Optional) The speed of the generated audio, from 0.25 to 4.0. Default is 1.0. | |
| language | string | (Optional) The language of the input text. | |
| volume | number | (Optional) The volume of the generated audio, from 0.0 to 1.0. Default is 1.0. | |
| pitch | number | (Optional) The pitch of the generated audio, from -1.0 to 1.0. Default is 0.0. | |
| stream | bool | (Optional) Whether to return the audio as a stream. Default is false. | |
| reference_voice_wav | string | (Optional) The file path of the reference WAV audio. | Reference audio |
| timber_weights | map[string]float | (Optional) The file paths and corresponding weights of WAV audio generated by the Sovits model. The weights must sum to 1. | Fused (blended) audio |
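To see how these parameters combine, here is a minimal sketch of a request that sets several of the optional fields, using Python's requests library (the field values are illustrative, not recommendations):

import requests

API_KEY = "YOUR_API_KEY"  # replace with your real key

response = requests.post(
    "https://platform.llmprovider.ai/v1/audio/speech",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "SenseTTS-Fusion-20250324",
        "input": "Hello, how are you today?",
        "voice": "girl_naisheng",
        "response_format": "mp3",  # mp3, wav, or wav_stream
        "speed": 1.0,              # 0.25 to 4.0, default 1.0
        "volume": 1.0,             # 0.0 to 1.0, default 1.0
        "pitch": 0.0,              # -1.0 to 1.0, default 0.0
    },
)
response.raise_for_status()

# The response body is the raw audio data in the requested format.
with open("speech.mp3", "wb") as f:
    f.write(response.content)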

Example Request

{
  "model": "SenseTTS-Fusion-20250324",
  "input": "人之初,性本善",
  "voice": "guy_shuaiqi"
}
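Because the endpoint is OpenAI-compatible, the same request can also be issued through OpenAI's Python SDK by pointing the client at the LLMVision base URL. A minimal sketch, assuming the base URL https://platform.llmprovider.ai/v1 inferred from the endpoint above:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://platform.llmprovider.ai/v1",
)

# Generate speech for the sample text and write it to disk.
with client.audio.speech.with_streaming_response.create(
    model="SenseTTS-Fusion-20250324",
    input="人之初,性本善",
    voice="guy_shuaiqi",
) as response:
    response.stream_to_file("speech.mp3")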

Response

The API returns an audio file in the requested format.

Example cURL Request

curl -X POST https://platform.llmprovider.ai/v1/audio/speech \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "SenseTTS-Fusion-20250324",
    "input": "Hello, how are you today?",
    "voice": "girl_naisheng"
  }' \
  --output speech.mp3
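If you want to start playback before the full file is generated, the stream and wav_stream options described above can be combined with chunked reading. A sketch using requests, assuming the server sends raw audio bytes incrementally when stream is true:

import requests

response = requests.post(
    "https://platform.llmprovider.ai/v1/audio/speech",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "SenseTTS-Fusion-20250324",
        "input": "Hello, how are you today?",
        "voice": "girl_naisheng",
        "response_format": "wav_stream",  # streaming audio response
        "stream": True,
    },
    stream=True,  # let requests yield the body incrementally
)
response.raise_for_status()

# Consume audio chunks as they arrive, e.g. to feed an audio player.
with open("speech.wav", "wb") as f:
    for chunk in response.iter_content(chunk_size=4096):
        f.write(chunk)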

For any questions or further assistance, please contact us at [email protected].